A memory gradient method without line search for unconstrained optimization
Authors
Abstract
Memory gradient methods are used for unconstrained optimization, especially large-scale problems. The first memory gradient method was proposed by Miele and Cantrell (1969) and subsequently extended by Cragg and Levy (1969). Recently, Narushima and Yabe (2006) proposed a new memory gradient method which generates a descent search direction for the objective function at every iteration and converges globally to the solution provided that the Wolfe conditions are satisfied within the line search strategy. On the other hand, Sun and Zhang (2001) proposed a particular choice of step size and applied it to the conjugate gradient method. In this paper, we apply the step size proposed by Sun and Zhang to the memory gradient method proposed by Narushima and Yabe and establish its global convergence. AMS 2000 Mathematics Subject Classification: 90C06, 90C30, 65K05.
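The abstract gives no pseudocode, so the following is a minimal Python sketch of the kind of scheme it describes: a memory gradient direction built from the current gradient and a few stored past directions, combined with a fixed, line-search-free step size in the spirit of Sun and Zhang (2001). The safeguard constant c, the damping factor delta, the curvature bound L, and the particular beta rule below are illustrative assumptions, not the exact parameters of Narushima and Yabe's method.

```python
import numpy as np

def memory_gradient(grad, x0, m=3, c=0.5, delta=0.5, L=10.0,
                    tol=1e-6, max_iter=10_000):
    """Memory gradient iteration with a fixed, line-search-free step.

    Hypothetical parameters (not taken from the paper):
      m     -- number of stored past directions (the "memory")
      c     -- descent safeguard: enforce g.d <= -c * ||g||^2
      delta -- damping factor in the Sun-Zhang-style step size
      L     -- assumed curvature (Lipschitz-type) bound
    """
    x = np.asarray(x0, dtype=float)
    past = []                               # last m search directions
    for _ in range(max_iter):
        g = grad(x)
        gg = g @ g
        if np.sqrt(gg) < tol:
            break
        # Memory gradient direction d = -g + sum_i beta_i * d_{k-i}.
        # Each beta_i is capped so the combined correction cannot
        # destroy descent: g.d stays below -c * ||g||^2.
        d = -g
        for d_prev in past:
            cap = (1.0 - c) * gg / (m * max(abs(g @ d_prev), 1e-12))
            d = d + min(1.0, cap) * d_prev
        if g @ d > -c * gg:                 # fallback to steepest descent
            d = -g
        # Fixed step size computed directly from known quantities
        # (no function evaluations, hence no line search).
        alpha = -delta * (g @ d) / (L * (d @ d))
        x = x + alpha * d
        past = ([d] + past)[:m]
    return x

if __name__ == "__main__":
    # Quick check on an ill-conditioned quadratic f(x) = 0.5 x.A.x,
    # whose gradient is A @ x and whose minimizer is the origin.
    A = np.diag([1.0, 10.0, 100.0])
    x_star = memory_gradient(lambda x: A @ x, x0=np.ones(3), L=100.0)
    print(x_star)                           # approaches the minimizer 0
```

The point the sketch illustrates is that alpha_k = -delta * g_k^T d_k / (L ||d_k||^2) is positive whenever d_k is a descent direction and is computed without any trial function evaluations, which is what "without line search" means in the title.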
Similar resources
A new hybrid conjugate gradient algorithm for unconstrained optimization
In this paper, a new hybrid conjugate gradient algorithm is proposed for solving unconstrained optimization problems. The new method generates sufficient descent directions independently of any line search. Moreover, the global convergence of the proposed method is proved under the Wolfe line search. Numerical experiments are also presented to show the efficiency of the proposed algorithm, espe...
A New Hybrid Conjugate Gradient Method Based on Eigenvalue Analysis for Unconstrained Optimization Problems
In this paper, two extended three-term conjugate gradient methods based on the Liu-Storey (LS) conjugate gradient method are presented to solve unconstrained optimization problems. A remarkable property of the proposed methods is that, based on eigenvalue analysis, the search direction always satisfies the sufficient descent condition independently of the line search method. The globa...
A Free Line Search Steepest Descent Method for Solving Unconstrained Optimization Problems
In this paper, we solve unconstrained optimization problems using a line-search-free steepest descent method. First, we propose a double-parameter scaled quasi-Newton formula for calculating an approximation of the Hessian matrix. The approximation obtained from this formula is a positive definite matrix that satisfies the standard secant relation. We also show that the largest eigenvalue...
A Three-terms Conjugate Gradient Algorithm for Solving Large-Scale Systems of Nonlinear Equations
The nonlinear conjugate gradient method is well known for solving large-scale unconstrained optimization problems due to its low storage requirements and ease of implementation. Research activities on its application to higher-dimensional systems of nonlinear equations are just beginning. This paper presents a three-term conjugate gradient algorithm for solving large-scale systems of nonlinear e...
A symmetric rank-one updating method for solving large-scale optimization problems
Finding a local minimizer of an unconstrained optimization problem and finding a fixed point of the gradient system of ordinary differential equations are two closely related problems. Limited-memory algorithms are widely used to solve large-scale problems, while Runge-Kutta methods are used to solve differential equations numerically. In this paper, using the concept of the subspace method and...
Publication date: 2007